Results 1 - 20 of 129
1.
Sensors (Basel) ; 24(5)2024 Feb 22.
Article in English | MEDLINE | ID: mdl-38474939

ABSTRACT

The integration of sensor technology in healthcare has become crucial for disease diagnosis and treatment [...].


Subject(s)
Biomedical Technology , Delivery of Health Care , Humans , Artificial Intelligence
2.
IEEE Trans Cybern ; 54(2): 679-692, 2024 Feb.
Article in English | MEDLINE | ID: mdl-37028043

ABSTRACT

Camera-based passive dietary intake monitoring is able to continuously capture the eating episodes of a subject, recording rich visual information, such as the type and volume of food being consumed, as well as the eating behaviors of the subject. However, no current method can incorporate these visual cues and provide a comprehensive context of dietary intake from passive recording (e.g., whether the subject is sharing food with others, what food the subject is eating, and how much food is left in the bowl). At the same time, privacy is a major concern when egocentric wearable cameras are used for capture. In this article, we propose a privacy-preserving solution (i.e., egocentric image captioning) for dietary assessment with passive monitoring, which unifies food recognition, volume estimation, and scene understanding. By converting images into rich text descriptions, nutritionists can assess individual dietary intake based on the captions instead of the original images, reducing the risk of privacy leakage from images. To this end, an egocentric dietary image captioning dataset has been built, consisting of in-the-wild images captured by head-worn and chest-worn cameras in field studies in Ghana. A novel transformer-based architecture is designed to caption egocentric dietary images. Comprehensive experiments have been conducted to evaluate the effectiveness and justify the design of the proposed architecture for egocentric dietary image captioning. To the best of our knowledge, this is the first work that applies image captioning to dietary intake assessment in real-life settings.
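For readers who want the general shape of such a captioner, here is a minimal sketch assuming PyTorch: a CNN backbone supplies visual memory tokens to a standard transformer decoder. The backbone choice, dimensions, and all identifiers below are illustrative assumptions, not the authors' actual architecture.

```python
# Minimal encoder-decoder captioning sketch; not the paper's model.
import torch
import torch.nn as nn
from torchvision.models import resnet18

class EgoCaptioner(nn.Module):
    def __init__(self, vocab_size, d_model=256, nhead=8, num_layers=3):
        super().__init__()
        backbone = resnet18(weights=None)
        self.encoder = nn.Sequential(*list(backbone.children())[:-2])  # spatial feature map
        self.proj = nn.Linear(512, d_model)                            # CNN channels -> d_model
        self.embed = nn.Embedding(vocab_size, d_model)
        layer = nn.TransformerDecoderLayer(d_model, nhead, batch_first=True)
        self.decoder = nn.TransformerDecoder(layer, num_layers)
        self.head = nn.Linear(d_model, vocab_size)

    def forward(self, images, tokens):
        feats = self.encoder(images)                          # (B, 512, H', W')
        memory = self.proj(feats.flatten(2).transpose(1, 2))  # (B, H'*W', d_model)
        tgt = self.embed(tokens)                              # (B, T, d_model)
        mask = nn.Transformer.generate_square_subsequent_mask(tokens.size(1))
        out = self.decoder(tgt, memory, tgt_mask=mask)        # causal decoding over the caption
        return self.head(out)                                 # next-token logits

# Usage sketch:
# logits = EgoCaptioner(vocab_size=5000)(torch.rand(2, 3, 224, 224),
#                                        torch.randint(0, 5000, (2, 12)))
```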


Subject(s)
Eating , Privacy , Diet , Nutrition Assessment , Feeding Behavior
3.
Nutrients ; 15(18)2023 Sep 20.
Article in English | MEDLINE | ID: mdl-37764857

ABSTRACT

BACKGROUND: Accurate estimation of dietary intake is challenging. However, whilst some progress has been made in high-income countries, low- and middle-income countries (LMICs) lag behind, contributing to critical nutritional data gaps. This study aimed to validate an objective, passive image-based dietary intake assessment method against weighed food records in London, UK, for onward deployment to LMICs. METHODS: Wearable camera devices were used to capture food intake on eating occasions in 18 adults and 17 children of Ghanaian and Kenyan origin living in London. Participants were provided pre-weighed meals of Ghanaian and Kenyan cuisine and camera devices to automatically capture images of the eating occasions. Food images were assessed for portion size, energy, and nutrient intake, and for the relative validity of the method compared to the weighed food records. RESULTS: The Pearson and intraclass correlation coefficients for estimated intakes of food, energy, and 19 nutrients ranged from 0.60 to 0.95 and 0.67 to 0.90, respectively. Bland-Altman analysis showed good agreement between the image-based method and the weighed food record. Underestimation of dietary intake by the image-based method ranged from 4 to 23%. CONCLUSIONS: Passive food image capture and analysis provides an objective assessment of dietary intake comparable to weighed food records.
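A brief sketch of the agreement statistics reported here (Pearson r, a two-way absolute-agreement ICC, and Bland-Altman bias with 95% limits of agreement), assuming NumPy/SciPy; the paired values are fabricated placeholders, not study data.

```python
import numpy as np
from scipy import stats

camera = np.array([420.0, 515.0, 610.0, 498.0, 707.0])   # image-based energy estimates (kcal)
weighed = np.array([450.0, 540.0, 600.0, 530.0, 720.0])  # weighed food record (kcal)

r, _ = stats.pearsonr(camera, weighed)

# ICC(2,1)-style single-measure ICC from the ANOVA decomposition
# of the n-subjects x k-methods rating matrix.
ratings = np.column_stack([camera, weighed])
n, k = ratings.shape
ms_rows = k * ratings.mean(axis=1).var(ddof=1)            # between-subject mean square
ms_cols = n * ratings.mean(axis=0).var(ddof=1)            # between-method mean square
ss_err = ((ratings - ratings.mean()) ** 2).sum() - ms_rows * (n - 1) - ms_cols * (k - 1)
ms_err = ss_err / ((n - 1) * (k - 1))
icc = (ms_rows - ms_err) / (ms_rows + (k - 1) * ms_err + k * (ms_cols - ms_err) / n)

# Bland-Altman: mean difference (bias) and 95% limits of agreement.
diff = camera - weighed
bias, loa = diff.mean(), 1.96 * diff.std(ddof=1)
print(f"r={r:.2f} ICC={icc:.2f} bias={bias:.1f} LoA=({bias - loa:.1f}, {bias + loa:.1f})")
```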


Subject(s)
Eating , Food , Humans , Adult , Child , London , Ghana , Kenya
4.
Front Nutr ; 10: 1191962, 2023.
Article in English | MEDLINE | ID: mdl-37575335

ABSTRACT

Introduction: Dietary assessment is important for understanding nutritional status. Traditional methods of monitoring food intake through self-report, such as diet diaries, 24-hour dietary recall, and food frequency questionnaires, may be subject to error and can be time-consuming for the user. Methods: This paper presents a semi-automatic dietary assessment tool we developed - a desktop application called Image to Nutrients (I2N) - to process sensor-detected eating events and images captured during these eating events by a wearable sensor. I2N offers multiple food and nutrient databases (e.g., USDA-SR, FNDDS, USDA Global Branded Food Products Database) for annotating eating episodes and food items, and estimates energy intake, nutritional content, and the amount consumed. The components of I2N are threefold: 1) sensor-guided image review, 2) annotation of food images for nutritional analysis, and 3) access to multiple food databases. Two studies were used to evaluate the feasibility and usefulness of I2N: 1) a US-based study with 30 participants and a total of 60 days of data and 2) a Ghana-based study with 41 participants and a total of 41 days of data. Results: Across both studies, a total of 314 eating episodes were annotated using at least three food databases. Using I2N's sensor-guided image review, the number of images that needed to be reviewed was reduced by 93% and 85% for the two studies, respectively, compared to reviewing all the images. Discussion: I2N is a unique tool that allows for simultaneous viewing of food images, sensor-guided image review, and access to multiple databases in one tool, making nutritional analysis of food images efficient. The tool is flexible, allowing for nutritional analysis of images even when sensor signals are not available.
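As a rough illustration of sensor-guided image review, the sketch below keeps only images whose timestamps fall within sensor-detected eating windows; the event times, sampling rate, and padding are hypothetical assumptions, not values from the paper.

```python
from datetime import datetime, timedelta

def filter_images(image_times, eating_events, pad_s=60):
    """Keep images inside any (start, end) eating window, padded by pad_s seconds."""
    kept = []
    for t in image_times:
        for start, end in eating_events:
            if start - timedelta(seconds=pad_s) <= t <= end + timedelta(seconds=pad_s):
                kept.append(t)
                break
    return kept

day = datetime(2021, 6, 1)
image_times = [day + timedelta(seconds=30 * i) for i in range(2880)]  # one image per 30 s
events = [(day + timedelta(hours=12), day + timedelta(hours=12, minutes=20))]
kept = filter_images(image_times, events)
print(f"review burden reduced by {100 * (1 - len(kept) / len(image_times)):.0f}%")
```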

5.
Math Biosci Eng ; 20(4): 6294-6311, 2023 Jan 31.
Article in English | MEDLINE | ID: mdl-37161107

ABSTRACT

Estimating the volume of food plays an important role in diet monitoring, but it is difficult to perform this estimation automatically and accurately. A new method based on the multi-layer superpixel technique is proposed in this paper to avoid tedious human-computer interaction and improve estimation accuracy. Our method includes the following steps: 1) obtain a pair of food images along with depth information using a stereo camera; 2) reconstruct the plate plane from the disparity map; 3) warp the input image and the disparity map to form a new viewing direction parallel to the plate plane; 4) cut the warped image into a series of slices according to the depth information and estimate the occluded part of the food; and 5) rescale superpixels for each slice and estimate the food volume by accumulating all available slices in the segmented food region. By combining the image data and the disparity map, the influence of noise and visual error found in existing interactive food volume estimation methods is reduced, and estimation accuracy is improved. Our experiments show that our method is effective, accurate, and convenient, providing a new tool for promoting a balanced diet and maintaining health.
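Steps 4-5 amount to integrating per-pixel food height over the segmented region, slice by slice. A coarse numerical sketch of that accumulation, with synthetic depth, mask, and pixel calibration, assuming NumPy:

```python
import numpy as np

px_area_cm2 = 0.01                                   # pixel footprint on the plate plane (assumed)
height_cm = np.random.uniform(0, 3, (200, 200))      # rectified food height above the plate
food_mask = np.zeros((200, 200), dtype=bool)
food_mask[50:150, 50:150] = True                     # segmented food region

# Discretize heights into slices and accumulate, mirroring the
# multi-layer scheme at a coarse level.
slice_h = 0.5
volume = 0.0
for z in np.arange(0, height_cm.max(), slice_h):
    in_slice = food_mask & (height_cm > z)           # food columns reaching this slice
    volume += in_slice.sum() * px_area_cm2 * slice_h
print(f"estimated volume ~ {volume:.1f} cm^3")
```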

6.
J Diabetes Sci Technol ; 17(5): 1212-1225, 2023 09.
Article in English | MEDLINE | ID: mdl-37162011

ABSTRACT

OBJECTIVE: Dietary self-management is one key component to achieving optimal glycemic control. Advances in mobile health (mHealth) technology have reduced the burden of diabetes self-management; however, little is known about the current body of research using mHealth technology for dietary management for adults with type 2 diabetes. METHODS: Literature searches were conducted electronically using PubMed, CINAHL (EBSCO), Web of Science Core Collection, PsycINFO (Ovid), EMBASE (Ovid), and Scopus. Keywords and subject headings covered dietary management, type 2 diabetes, and mHealth. Inclusion criteria included studies that applied mHealth for dietary self-management for adults with type 2 diabetes and were published in English as full articles. RESULTS: This review (N = 15 studies) revealed heterogeneity among the mHealth-based dietary self-management interventions, with results reported for physiological, dietary behavioral, and psychosocial outcomes. Twelve studies applied smartphone apps with varied functions for dietary management or intervention, while three studies applied continuous glucose monitoring (CGM) to guide dietary changes. Among the 15 reviewed studies, only three were two-arm randomized controlled trials (RCTs) with larger samples and 12-month study durations; the other 12 were pilot studies. Nine of the 12 pilot studies showed improved HbA1c; most reported varied dietary changes; and a few showed improved diabetes distress and depression. CONCLUSION: Our review provides evidence that the application of mHealth technology for dietary intervention for adults with type 2 diabetes is still largely at the pilot stage. The preliminary effects on physiological, dietary behavioral, and psychosocial outcomes remain inconclusive.


Subject(s)
Diabetes Mellitus, Type 2 , Mobile Applications , Self-Management , Telemedicine , Humans , Adult , Self-Management/methods , Diabetes Mellitus, Type 2/therapy , Telemedicine/methods , Technology , Randomized Controlled Trials as Topic
7.
Madima 23 (2023) ; 2023: 1-9, 2023 Oct.
Article in English | MEDLINE | ID: mdl-38288389

ABSTRACT

An unhealthy diet is a top risk factor for obesity and numerous chronic diseases. To help the public adopt a healthy diet, nutrition scientists need user-friendly tools to conduct Dietary Assessment (DA). In recent years, new DA tools have been developed using a smartphone or a wearable device that acquires images during a meal. These images are then processed to estimate the calories and nutrients of the consumed food. Although considerable progress has been made, 2D food images lack a scale reference and 3D volumetric information. In addition, the food must be sufficiently observable from the image. This basic condition can be met when the food is stand-alone (no food container is used) or is contained in a shallow plate. However, the condition cannot be met easily when a bowl is used: the food is often occluded by the bowl edge, and the shape of the bowl may not be fully determined from the image. Yet bowls are the food containers most used by billions of people in many parts of the world, especially in Asia and Africa. In this work, we propose to premeasure plates and bowls using a marked adhesive strip before a dietary study starts. This simple procedure eliminates the need for a scale reference throughout the DA study. In addition, we use mathematical models and image processing to reconstruct the bowl in 3D. Our key idea is to estimate how full the bowl is rather than how much food (in either volume or weight) is in the bowl. This idea reduces the effect of occlusion. Experimental data show satisfactory results for our methods, which enable accurate DA studies using both plates and bowls with reduced burden on research participants.
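The fullness idea can be made concrete: once a bowl's interior radius profile r(z) is premeasured, a fullness level maps to a volume via the solid-of-revolution integral V(h) = pi * integral of r(z)^2 dz from 0 to h. A sketch with a hypothetical premeasured profile (the linear profile and all numbers are invented):

```python
import numpy as np

depth_cm = np.linspace(0.0, 6.0, 61)          # height above the bowl bottom
radius_cm = 3.0 + 0.75 * depth_cm             # premeasured interior radius profile r(z)

def volume_at_fullness(fullness):
    """Volume of food when filled to a fraction of the bowl depth."""
    h = fullness * depth_cm[-1]
    z = depth_cm[depth_cm <= h]
    return np.pi * np.trapz(radius_cm[: len(z)] ** 2, z)   # solid of revolution

print(f"60% full ~ {volume_at_fullness(0.6):.0f} cm^3 "
      f"of {volume_at_fullness(1.0):.0f} cm^3 capacity")
```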

8.
Sensors (Basel) ; 22(20)2022 Oct 20.
Article in English | MEDLINE | ID: mdl-36298356

ABSTRACT

An unhealthy diet is strongly linked to obesity and numerous chronic diseases. Currently, over two-thirds of American adults are overweight or obese. Although dietary assessment helps people improve nutrition and lifestyle, traditional methods of dietary assessment depend on self-report, which is inaccurate and often biased. In recent years, as electronics, information, and artificial intelligence (AI) technologies have advanced rapidly, image-based objective dietary assessment using wearable electronic devices has become a powerful approach. However, research in this field has focused on the development of advanced algorithms to process image data. Few reports exist on the study of device hardware for the particular purpose of dietary assessment. In this work, we demonstrate that, with current hardware designs, there is a considerable risk of missing important dietary data owing to the common use of a rectangular image format and a fixed camera orientation. We then present two designs of a new camera system that reduces data loss by generating circular images using rectangular image sensor chips. We also present a mechanical design that allows the camera orientation to be adjusted, adapting to differences among device wearers, such as gender and body height. Finally, we discuss the pros and cons of rectangular versus circular images with respect to information preservation and data processing using AI algorithms.
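A toy illustration of the circular-image argument: a disk inscribed in the sensor keeps its content under any in-plane rotation of the wearer, unlike a rectangle's corners. Sizes below are arbitrary assumptions.

```python
import numpy as np

H = W = 1000
yy, xx = np.mgrid[0:H, 0:W]
circle = (yy - H / 2) ** 2 + (xx - W / 2) ** 2 <= (min(H, W) / 2) ** 2

frame = np.random.randint(0, 256, (H, W), dtype=np.uint8)  # stand-in sensor readout
circular_image = np.where(circle, frame, 0)                # keep only the inscribed disk

print(f"disk keeps {100 * circle.mean():.0f}% of the rectangular pixels, "
      "but its content is preserved under any in-plane rotation")
```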


Subject(s)
Nutrition Assessment , Wearable Electronic Devices , Adult , Humans , Artificial Intelligence , Diet , Algorithms
9.
Public Health Nutr ; : 1-11, 2022 May 26.
Article in English | MEDLINE | ID: mdl-35616087

ABSTRACT

OBJECTIVE: Passive, wearable sensors can be used to obtain objective information on infant feeding, but their use has not been tested. Our objective was to compare assessment of infant feeding (frequency, duration, and cues) by self-report with that by the Automatic Ingestion Monitor-2 (AIM-2). DESIGN: A cross-sectional pilot study was conducted in Ghana. Mothers wore the AIM-2 on eyeglasses for 1 d during waking hours to assess infant feeding using images automatically captured by the device every 15 s. Feasibility was assessed using compliance with wearing the device. Infant feeding practices captured in the AIM-2 images were annotated by a trained evaluator and compared with maternal self-report via an interviewer-administered questionnaire. SETTING: Rural and urban communities in Ghana. PARTICIPANTS: Participants were thirty-eight (eighteen rural and twenty urban) breast-feeding mothers of infants (child age ≤7 months). RESULTS: Twenty-five mothers reported exclusive breast-feeding, which was common among those < 30 years of age (n 15, 60 %) and those residing in urban communities (n 14, 70 %). Compliance with wearing the AIM-2 was high (83 % of wake-time), suggesting low user burden. Maternal report differed from the AIM-2 data, such that mothers reported higher mean breast-feeding frequency (eleven v. eight times, P = 0·041) and duration (18·5 v. 10 min, P = 0·007) during waking hours. CONCLUSION: The AIM-2 was a feasible tool for the passive, objective assessment of infant feeding among mothers in Ghana and identified overestimation of self-reported breast-feeding frequency and duration. Future studies using the AIM-2 are warranted to determine validity on a larger scale.

10.
Sensors (Basel) ; 22(4)2022 Feb 15.
Article in English | MEDLINE | ID: mdl-35214399

ABSTRACT

Knowing the amounts of energy and nutrients in an individual's diet is important for maintaining health and preventing chronic diseases. As electronic and AI technologies advance rapidly, dietary assessment can now be performed using food images obtained from a smartphone or a wearable device. One of the challenges in this approach is to computationally measure the volume of food in a bowl from an image. This problem has not been studied systematically despite the bowl being the most utilized food container in many parts of the world, especially in Asia and Africa. In this paper, we present a new method to measure the size and shape of a bowl by adhering a paper ruler centrally across the bottom and sides of the bowl and then taking an image. When observed in the image, the distortions in the width of the paper ruler and the spacings between ruler markers completely encode the size and shape of the bowl. A computational algorithm is developed to reconstruct the three-dimensional bowl interior using the observed distortions. Our experiments using nine bowls, colored liquids, and amorphous foods demonstrate the high accuracy of our method for food volume estimation involving round bowls as containers. A total of 228 images of amorphous foods were also used in a comparative experiment between our algorithm and an independent human estimator. The results showed that our algorithm outperformed the human estimator, who utilized different types of reference information and two estimation methods, including direct volume estimation and indirect estimation through the fullness of the bowl.
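A heavily simplified geometric reading of the ruler idea, assuming a top-down view: a segment of true length s0 that appears foreshortened to s_obs must rise by sqrt(s0^2 - s_obs^2), so integrating segments from the bowl center outward yields an interior profile. The spacings below are invented, and the paper's actual reconstruction is more general than this sketch.

```python
import numpy as np

s0 = 1.0                                              # true marker spacing (cm)
s_obs = np.array([1.0, 1.0, 0.95, 0.8, 0.6, 0.45])    # observed spacings, center -> rim

r, z = [0.0], [0.0]
for s in s_obs:
    dr = s                                            # horizontal run of this segment
    dz = np.sqrt(max(0.0, s0 ** 2 - s ** 2))          # vertical rise from foreshortening
    r.append(r[-1] + dr)
    z.append(z[-1] + dz)

r, z = np.array(r), np.array(z)
capacity = np.pi * np.trapz(r ** 2, z)                # solid of revolution about the bowl axis
print(f"reconstructed rim height {z[-1]:.2f} cm, capacity ~ {capacity:.0f} cm^3")
```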


Subject(s)
Diet , Energy Intake , Algorithms , Food , Humans , Smartphone
11.
Electronics (Basel) ; 10(13)2021 Jul.
Article in English | MEDLINE | ID: mdl-34552763

ABSTRACT

It is well known that many chronic diseases are associated with an unhealthy diet. Although improving one's diet is critical, adopting a healthy diet is difficult despite its benefits being well understood. Technology is needed to assess dietary intake accurately and easily in real-world settings so that effective interventions to manage overweight, obesity, and related chronic diseases can be developed. In recent years, new wearable imaging and computational technologies have emerged. These technologies are capable of performing objective and passive dietary assessments with a much simpler procedure than traditional questionnaires. However, a critical task is to estimate the portion size (in this case, the food volume) from a digital image. Currently, this task is very challenging because the volumetric information in two-dimensional images is incomplete, and the estimation involves a great deal of imagination, beyond the capacity of traditional image processing algorithms. In this work, we present a novel artificial intelligence (AI) system that mimics the thinking of dietitians, who use a set of common objects as gauges (e.g., a teaspoon, a golf ball, a cup, and so on) to estimate portion size. Specifically, our human-mimetic system "mentally" gauges the volume of food using a set of internal reference volumes that have been learned previously. At the output, our system produces a vector of probabilities of the food with respect to the internal reference volumes. The estimation is then completed by an "intelligent guess", implemented as an inner product between the probability vector and the reference volume vector. Our experiments using both virtual and real food datasets show accurate volume estimation results.
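The "intelligent guess" is directly expressible as an expected value: the inner product of the output probability vector with the learned reference volumes. A sketch with invented reference volumes and probabilities:

```python
import numpy as np

# Internal reference volumes (mL), e.g. teaspoon, golf ball, tennis ball, cup, bowl.
reference_ml = np.array([5.0, 40.0, 150.0, 240.0, 500.0])
probs = np.array([0.02, 0.08, 0.55, 0.30, 0.05])   # softmax output of the recognizer

assert np.isclose(probs.sum(), 1.0)
estimate = float(probs @ reference_ml)             # expected volume under the distribution
print(f"estimated portion ~ {estimate:.0f} mL")
```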

12.
Front Artif Intell ; 4: 644712, 2021.
Article in English | MEDLINE | ID: mdl-33870184

ABSTRACT

Malnutrition, including both undernutrition and obesity, is a significant problem in low- and middle-income countries (LMICs). In order to study malnutrition and develop effective intervention strategies, it is crucial to evaluate nutritional status in LMICs at the individual, household, and community levels. In a multinational research project supported by the Bill & Melinda Gates Foundation, we have been using wearable technology to conduct objective dietary assessment in sub-Saharan Africa. Our assessment includes multiple diet-related activities in urban and rural families, including food sources (e.g., shopping, harvesting, and gathering), preservation/storage, preparation, cooking, and consumption (e.g., portion size and nutrition analysis). Our wearable device ("eButton", worn on the chest) acquires real-life images automatically during waking hours at preset time intervals. The recorded images, numbering tens of thousands per day, are post-processed to obtain the information of interest. Although we expect future Artificial Intelligence (AI) technology to extract this information automatically, at present we utilize AI to separate the acquired images into two binary classes: images with (Class 1) and without (Class 0) edible items. As a result, researchers need only study Class-1 images, reducing their workload significantly. In this paper, we present a composite machine learning method to perform this classification, meeting the specific challenges of high complexity and diversity in real-world LMIC data. Our method consists of a deep neural network (DNN) and a shallow learning network (SLN) connected by a novel probabilistic network interface layer. After presenting the details of our method, an image dataset acquired from Ghana is utilized to train and evaluate the machine learning system. Our comparative experiment indicates that the new composite method performs better than the conventional deep learning method, as assessed by integrated measures of sensitivity, specificity, and burden index indicated by the Receiver Operating Characteristic (ROC) curve.
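A schematic sketch of a deep-plus-shallow composite of this general kind, assuming scikit-learn: a network's class probabilities pass through a probabilistic interface into a shallow learner. The feature construction and both models are stand-ins, not the paper's DNN/SLN design.

```python
import numpy as np
from sklearn.neural_network import MLPClassifier
from sklearn.svm import SVC

rng = np.random.default_rng(0)
X = rng.normal(size=(400, 64))                     # stand-in deep features per image
y = (X[:, :4].sum(axis=1) > 0).astype(int)         # 1 = contains edible items (synthetic labels)

dnn = MLPClassifier(hidden_layer_sizes=(32,), max_iter=500).fit(X[:300], y[:300])
p = dnn.predict_proba(X)                           # probabilistic interface layer

shallow_in = np.column_stack([p, X[:, :8]])        # probabilities + a few raw features
sln = SVC(probability=True).fit(shallow_in[:300], y[:300])
print("composite accuracy:", (sln.predict(shallow_in[300:]) == y[300:]).mean())
```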

13.
Public Health Nutr ; 24(6): 1248-1255, 2021 04.
Article in English | MEDLINE | ID: mdl-32854804

ABSTRACT

OBJECTIVE: Accurate measurements of food volume and density are often required as 'gold standards' for calibration of image-based dietary assessment and food database development. Currently, there is no specialised laboratory instrument for these measurements. We present the design of a new volume-density (VD) meter to bridge this technological gap. DESIGN: Our design consists of a turntable, a load sensor, a set of cameras and lights installed on an arc-shaped stationary support, and a microcomputer. It acquires an array of food images, reconstructs a 3D volumetric model, weighs the food and calculates both food volume and density, all in an automatic process controlled by the microcomputer. To adapt to the complex shapes of foods, a new food surface model, derived from the electric field of charged particles, is developed for 3D point cloud reconstruction of either convex or concave food surfaces. RESULTS: We conducted two experiments to evaluate the VD meter. The first experiment utilised computer-synthesised 3D objects with prescribed convex and concave surfaces of known volumes to investigate different food surface types. The second experiment was based on actual foods with different shapes, colours and textures. Our results indicated that, for synthesised objects, the measurement error of the electric field-based method was <1 %, significantly lower than that of traditional methods. For real-world foods, the measurement error depended on the type of food volume (detailed discussion included). The largest error was approximately 5 %. CONCLUSION: The VD meter provides a new electronic instrument to support advanced research in nutrition science.
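The meter's final computation is simply density = weighed mass / reconstructed volume. The sketch below uses a convex hull on a synthetic point cloud for brevity; the paper's electric-field surface model exists precisely because convex hulls misestimate concave foods. The mass reading is an invented placeholder.

```python
import numpy as np
from scipy.spatial import ConvexHull

rng = np.random.default_rng(1)
points = rng.uniform(-3, 3, size=(2000, 3))
points = points[np.linalg.norm(points, axis=1) <= 3.0]   # cloud filling a ball, radius 3 cm

volume_cm3 = ConvexHull(points).volume                   # ~ (4/3)*pi*27 ~ 113 cm^3
mass_g = 95.0                                            # reading from the load sensor (assumed)
print(f"volume ~ {volume_cm3:.0f} cm^3, density ~ {mass_g / volume_cm3:.2f} g/cm^3")
```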


Subject(s)
Electronics , Food , Calibration , Humans
14.
J Acad Nutr Diet ; 120(7): 1119-1132, 2020 07.
Article in English | MEDLINE | ID: mdl-32280056

ABSTRACT

BACKGROUND: Food preparation interventions are an increasingly popular target for hands-on nutrition education for adults, children, and families, but assessment tools are lacking. Objective data on home cooking practices, and on how they are interpreted through different data collection methods, are needed. OBJECTIVE: The goal of this study was to explore the utility of the Healthy Cooking Index in coding multiple types of home food preparation data and elucidating healthy cooking behavior patterns. DESIGN: Parent-child dyads were recruited between October 2017 and June 2018 in Houston and Austin, Texas, for this observational study. Food preparation events were observed and video recorded. Participants also wore a body camera (eButton) and completed a questionnaire during the same event. PARTICIPANTS/SETTING: Parents with a school-aged child were recruited as dyads (n=40). Data collection procedures took place in participant homes during evening meal preparation events. MAIN OUTCOME MEASURES: Food preparation data were collected from parents through direct observation during preparation as well as through eButton images and paper questionnaires completed immediately after the event. STATISTICAL ANALYSES PERFORMED: All data sets were analyzed using the Healthy Cooking Index coding system and compared for concordance. A paired sample t test was used to examine differences between the scores. Cronbach's α and principal components analysis were conducted on the observed Healthy Cooking Index items to examine patterns of cooking practices. RESULTS: Two main components of cooking practices emerged from the principal components analysis: one focused on meat products and another on health- and taste-enhancing practices. The eButton was more accurate in capturing Healthy Cooking Index practices than the self-report questionnaire. Significant differences were found between participant-reported and observed summative Healthy Cooking Index scores (P<0.001), with no significant differences between scores computed from eButton images and observations (P=0.187). CONCLUSIONS: This is the first study to examine nutrition-optimizing home cooking practices through observational, wearable-camera, and self-report data collection methods. By strengthening cooking behavior assessment tools, future research will be able to elucidate the transmission of cooking education through interventions and the relationships between cooking practices, disease prevention, and health.
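For readers unfamiliar with the statistics used here, a short sketch of Cronbach's α over binary index items and a paired t test between reported and observed summative scores; all data are fabricated placeholders, not study values.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(2)
latent = rng.normal(size=(40, 1))                       # common "healthy cook" trait
items = (latent + rng.normal(size=(40, 10)) > 0).astype(int)   # 40 dyads x 10 binary items

# Cronbach's alpha = k/(k-1) * (1 - sum(item variances) / variance of total score)
k = items.shape[1]
alpha = k / (k - 1) * (1 - items.var(axis=0, ddof=1).sum()
                       / items.sum(axis=1).var(ddof=1))

observed = items.sum(axis=1).astype(float)
reported = observed + rng.normal(1.0, 1.5, size=40)     # self-report drifts upward
t, p = stats.ttest_rel(reported, observed)
print(f"alpha={alpha:.2f}, paired t={t:.2f}, p={p:.4f}")
```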


Subject(s)
Cooking/methods , Diet, Healthy/methods , Meals , Parents , Adolescent , Body Mass Index , Child , Child, Preschool , Female , Health Promotion/methods , Health Status , Humans , Male , Meat , Nutritional Sciences/education , Self Report , Surveys and Questionnaires , Taste , Video Recording
15.
Article in English | MEDLINE | ID: mdl-32191886

ABSTRACT

Semantic segmentation is a key step in scene understanding for autonomous driving. Although deep learning has significantly improved the segmentation accuracy, current high-quality models such as PSPNet and DeepLabV3 are inefficient given their complex architectures and reliance on multi-scale inputs. Thus, it is difficult to apply them to real-time or practical applications. On the other hand, existing real-time methods cannot yet produce satisfactory results on small objects such as traffic lights, which are imperative to safe autonomous driving. In this paper, we improve the performance of real-time semantic segmentation from two perspectives, methodology and data. Specifically, we propose a real-time segmentation model coined Narrow Deep Network (NDNet) and build a synthetic dataset by inserting additional small objects into the training images. The proposed method achieves 65.7% mean intersection over union (mIoU) on the Cityscapes test set with only 8.4G floating-point operations (FLOPs) on 1024×2048 inputs. Furthermore, by re-training the existing PSPNet and DeepLabV3 models on our synthetic dataset, we obtained an average 2% mIoU improvement on small objects.
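For reference, the mIoU figure quoted here averages per-class intersection-over-union across classes. A minimal sketch on two tiny invented label maps:

```python
import numpy as np

def mean_iou(pred, gt, num_classes):
    """mIoU = average over classes of |pred ∩ gt| / |pred ∪ gt| (pixel counts)."""
    ious = []
    for c in range(num_classes):
        inter = np.logical_and(pred == c, gt == c).sum()
        union = np.logical_or(pred == c, gt == c).sum()
        if union > 0:                       # skip classes absent from both maps
            ious.append(inter / union)
    return float(np.mean(ious))

gt = np.array([[0, 0, 1], [1, 2, 2], [2, 2, 2]])
pred = np.array([[0, 1, 1], [1, 2, 2], [2, 2, 0]])
print(f"mIoU = {mean_iou(pred, gt, num_classes=3):.3f}")   # 0.600 on this toy pair
```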

16.
Curr Dev Nutr ; 4(2): nzaa020, 2020 Feb.
Article in English | MEDLINE | ID: mdl-32099953

ABSTRACT

Malnutrition is a major concern in low- and middle-income countries (LMIC), but the full extent of nutritional deficiencies remains unknown, largely due to a lack of accurate assessment methods. This study seeks to develop and validate an objective, passive method of estimating food and nutrient intake in households in Ghana and Uganda. Household members (including under-5s and adolescents) are assigned a wearable camera device to capture images of their food intake during waking hours. Using custom software, the captured images are then used to estimate an individual's food and nutrient (i.e., protein, fat, carbohydrate, energy, and micronutrient) intake. Passive food image capture and assessment provides an objective measure of food and nutrient intake in real time, minimizing some of the limitations associated with self-reported dietary intake methods. Its use in LMIC could potentially increase the understanding of a population's nutritional status, and the contribution of household food intake to the malnutrition burden. This project is registered at clinicaltrials.gov (NCT03723460).

17.
Front Nutr ; 7: 519444, 2020.
Article in English | MEDLINE | ID: mdl-33521029

ABSTRACT

Despite the extreme importance of food intake to human health, it is currently difficult to conduct an objective dietary assessment without individuals' self-report. In recent years, a passive method utilizing a wearable electronic device has emerged. This device acquires food images automatically during the eating process. These images are then analyzed to estimate intakes of calories and nutrients, assisted by advanced computational algorithms. Although this passive method is highly desirable, it has been thwarted by the requirement of a fiducial marker, which must be present in the image as a scale reference. The importance of this scale reference is analogous to that of the scale bar on a map, which determines distances or areas in any geographic region the map covers. Likewise, the sizes or volumes of arbitrary foods on a dining table covered by an image cannot be determined without the scale reference. Currently, the fiducial marker (often a checkerboard card) serves as the scale reference and must be present on the table before taking pictures, requiring human effort to carry, place, and retrieve it manually. In this work, we demonstrate that the fiducial marker can be eliminated if an individual's dining location is fixed and a one-time calibration using a circular plate of known size is performed. When the individual uses another circular plate of unknown size, our algorithm estimates its radius using the range of pre-calibrated distances between the camera and the plate, from which the desired scale reference is determined automatically. Our comparative experiment indicates that the mean absolute percentage error of the proposed estimation method is ~10.73%. Although this error is larger than the 6.68% of the manual method using a fiducial marker on the table, the new method has the distinctive advantage of eliminating the manual procedure and automatically generating the scale reference.
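Under a pinhole model, the apparent radius obeys r_px = f * R / Z, so the one-time calibration with a known plate fixes the effective focal length, and a later plate photographed at a distance within the calibrated range can then be sized. A numeric sketch with invented values:

```python
known_R_cm = 12.0        # calibration plate radius
known_r_px = 480.0       # its apparent radius in the calibration image
dist_cm = 45.0           # camera-to-plate distance at the fixed dining location

f_px = known_r_px * dist_cm / known_R_cm      # effective focal length in pixels

# Later: a different plate photographed at (approximately) the same distance.
unknown_r_px = 560.0
est_R_cm = unknown_r_px * dist_cm / f_px      # -> 14.0 cm here
print(f"estimated plate radius ~ {est_R_cm:.1f} cm")   # scale reference for food sizing
```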

18.
Adv Wound Care (New Rochelle) ; 9(1): 28-33, 2020 01 01.
Article in English | MEDLINE | ID: mdl-31871828

ABSTRACT

Objective: The objective of this prospective clinical study was to validate two prototype pressure ulcer monitoring platform (PUMP) devices, PUMP1 and PUMP2, to promote optimal bed repositioning of hospitalized patients to prevent pressure ulcers (PUs). Approach: PUMP1 was a wearable electronic device attached to the patient gown with no skin contact. PUMP2 was a set of four identical electronic devices placed under the patient's bed wheels. A video camera recorded events in the patient room while measurements from the PUMP devices were correlated with true patient repositioning activity. The performance of the PUMP devices, developed by our research team, was evaluated and compared by both clinicians and engineers. Results: Ten mobility-restricted patients were enrolled in the study. Repositioning movement was recorded by both PUMP devices for 10 ± 2 h and corroborated with video capture. One hundred thirty-seven movements in total were detected by both PUMP1 and PUMP2 over 105 h of capture. Two false positives were detected by the sensors and 11 movements were missed by the sensors. PUMP1 and PUMP2 never conflicted in data collection. Innovation: The presented study evaluated two different sensors' abilities to capture patient repositioning accurately in order to eventually prevent PU formation. Importantly, detection of patient motion was accomplished without contact with patient skin. Conclusion: The clinical study demonstrated successful capture of patient repositioning movement by both PUMP1 and PUMP2 devices with 85% reliability, 2 false positives, and 11 missed movements. In future studies, the PUMP devices will be combined with an SMS-based mobile phone alert system to improve caregiver repositioning behavior.


Subject(s)
Monitoring, Physiologic/instrumentation , Moving and Lifting Patients/methods , Patient Positioning/instrumentation , Pressure Ulcer/prevention & control , Adult , Aged , Clinical Trials as Topic , Equipment Design/methods , Female , Humans , Male , Middle Aged , Movement/physiology , Prospective Studies , Reproducibility of Results , Wearable Electronic Devices/standards
19.
EURASIP J Adv Signal Process ; 2019(1): 14, 2019.
Article in English | MEDLINE | ID: mdl-30881444

ABSTRACT

Recently, egocentric activity recognition has attracted considerable attention in the pattern recognition and artificial intelligence communities because of its widespread applicability to human systems, including the evaluation of diet and physical activity and the monitoring of patients and older adults. In this paper, we present a knowledge-driven multisource fusion framework for the recognition of egocentric activities of daily living (ADL). This framework employs Dezert-Smarandache theory across three information sources: the wearer's knowledge, images acquired by a wearable camera, and sensor data from wearable inertial measurement units and GPS. A simple likelihood table is designed to provide routine ADL information for each individual. A well-trained convolutional neural network is then used to produce a set of textual tags that, along with the routine information and other sensor data, are used to recognize ADLs based on information-theoretic statistics and a support vector machine. Our experiments show that the proposed method accurately recognizes 15 predefined ADL classes, including a variety of sedentary activities that have previously been difficult to recognize. When applied to real-life data recorded using a self-constructed wearable device, our method outperforms previous approaches, achieving an average accuracy of 85.4% for the 15 ADLs.
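As a simplified stand-in for the fusion step (the paper uses Dezert-Smarandache theory, which additionally handles conflicting and overlapping hypotheses), classical Dempster combination of two mass assignments over singleton ADL hypotheses looks like this; the masses are invented:

```python
def dempster_combine(m1, m2):
    """Combine two mass functions defined over the same singleton hypotheses.

    With singletons only, Dempster's rule reduces to the normalized product
    of the agreeing masses.
    """
    joint = {c: m1[c] * m2[c] for c in m1}            # agreeing mass per class
    conflict = 1.0 - sum(joint.values())              # mass assigned to contradictions
    return {c: v / (1.0 - conflict) for c, v in joint.items()}

camera = {"eating": 0.6, "watching_tv": 0.3, "reading": 0.1}   # CNN tag evidence
routine = {"eating": 0.5, "watching_tv": 0.1, "reading": 0.4}  # likelihood-table prior

print(dempster_combine(camera, routine))   # eating dominates after fusion (~0.81)
```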

20.
Sensors (Basel) ; 19(3)2019 Jan 28.
Article in English | MEDLINE | ID: mdl-30696100

ABSTRACT

Recently, egocentric activity recognition has attracted considerable attention in the pattern recognition and artificial intelligence communities because of its wide applicability in medical care, smart homes, and security monitoring. In this study, we developed and implemented a deep-learning-based hierarchical fusion framework for the recognition of egocentric activities of daily living (ADLs) in a wearable hybrid sensor system comprising motion sensors and cameras. Long short-term memory (LSTM) and a convolutional neural network are used to perform egocentric ADL recognition based on motion sensor data and photo streams in different layers, respectively. The motion sensor data are used solely for activity classification according to motion state, while the photo stream is used for further specific activity recognition within the motion state groups. Thus, both the motion sensor data and the photo stream work in their most suitable classification modes, significantly reducing the negative influence of sensor differences on the fusion results. Experimental results show that the proposed method not only is more accurate than the existing direct fusion method (by up to 6%) but also avoids the time-consuming computation of optical flow required by the existing method, which makes the proposed algorithm less complex and more suitable for practical application.
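The hierarchy can be sketched in a few lines: inertial data pick a coarse motion state, and an image-based classifier then chooses only among activities plausible for that state. The thresholds, states, and tags below are assumptions, not the paper's configuration:

```python
def motion_state(accel_energy):
    """Coarse state from inertial data (threshold is an assumed placeholder)."""
    return "ambulatory" if accel_energy > 1.5 else "sedentary"

def image_classifier(state, photo_tags):
    """Specific ADL chosen only among activities plausible for the state."""
    candidates = {
        "sedentary": ["eating", "reading", "watching TV"],
        "ambulatory": ["walking", "shopping", "housework"],
    }[state]
    return next((t for t in photo_tags if t in candidates), candidates[0])

state = motion_state(accel_energy=0.7)                      # -> "sedentary"
print(state, "->", image_classifier(state, ["table", "eating", "bowl"]))
```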
